
    Standardizing data exchange for clinical research protocols and case report forms: An assessment of the suitability of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM)

    Efficient communication of a clinical study protocol and case report forms during all stages of a human clinical study is important for many stakeholders. An electronic and structured study representation format that can be used throughout the whole study life-span can improve such communication and potentially lower total study costs. The most relevant standard for representing clinical study data, applicable to unregulated as well as regulated studies, is the Operational Data Model (ODM), in development since 1999 by the Clinical Data Interchange Standards Consortium (CDISC). ODM's initial objective was the exchange of case report form data, but it is increasingly utilized in other contexts. An ODM extension called the Study Design Model, introduced in 2011, provides additional protocol representation elements. Using a case study approach, we evaluated ODM's ability to capture all necessary protocol elements during a complete clinical study lifecycle in the Intramural Research Program of the National Institutes of Health. ODM offers the advantage of a single format for institutions that deal with hundreds or thousands of concurrent clinical studies and maintain a data warehouse for these studies. For each study stage, we present a list of gaps in the ODM standard and identify necessary vendor or institutional extensions that can compensate for such gaps. The current version of ODM (1.3.2) has only partial support for study protocol and study registration data, mainly because these are outside its original development goal. ODM provides comprehensive support for the representation of case report forms (both in the design stage and with patient-level data). Including the requirements of observational, non-regulated, or investigator-initiated studies (outside Food and Drug Administration (FDA) regulation) can further improve future revisions of the standard.
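
    Since ODM is an XML standard, a sense of its document structure helps make the gap analysis above concrete. Below is a minimal, illustrative sketch of an ODM 1.3 study skeleton built with Python's standard library; the element and attribute names follow the published ODM 1.3 schema, but all identifiers and content are hypothetical and the output is not schema-validated.

    ```python
    # Minimal sketch of an ODM 1.3 study skeleton (illustrative, not validated).
    import xml.etree.ElementTree as ET

    ODM_NS = "http://www.cdisc.org/ns/odm/v1.3"
    ET.register_namespace("", ODM_NS)

    def q(tag: str) -> str:
        """Qualify a tag name with the ODM namespace."""
        return f"{{{ODM_NS}}}{tag}"

    # Root element with the required ODM file attributes.
    odm = ET.Element(q("ODM"), {
        "FileOID": "Example.Study.001",            # hypothetical identifier
        "FileType": "Snapshot",
        "ODMVersion": "1.3.2",
        "CreationDateTime": "2024-01-01T00:00:00",
    })

    # One study with its global variables and one metadata version.
    study = ET.SubElement(odm, q("Study"), {"OID": "ST.001"})
    gv = ET.SubElement(study, q("GlobalVariables"))
    ET.SubElement(gv, q("StudyName")).text = "Example Protocol"
    ET.SubElement(gv, q("StudyDescription")).text = "Illustrative study skeleton"
    ET.SubElement(gv, q("ProtocolName")).text = "EX-001"

    mdv = ET.SubElement(study, q("MetaDataVersion"), {"OID": "MDV.1", "Name": "v1"})
    # A case report form definition, ODM's original strength.
    ET.SubElement(mdv, q("FormDef"),
                  {"OID": "F.DEMOG", "Name": "Demographics", "Repeating": "No"})

    print(ET.tostring(odm, encoding="unicode"))
    ```

    Protocol-level details (arms, visits, eligibility) are exactly where the gaps noted above appear; the Study Design Model extension adds elements for them.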

    Auditing Dynamic Links to Online Information Resources


    A critical analysis of COVID-19 research literature: Text mining approach

    Objective: Among the stakeholders of COVID-19 research, clinicians in particular have difficulty keeping up with the deluge of SARS-CoV-2 literature while performing their much-needed clinical duties. By revealing major topics, this study proposes a text-mining approach as an alternative to navigating large volumes of COVID-19 literature. Materials and methods: We obtained 85,268 references from the NIH COVID-19 Portfolio as of November 21. After excluding articles with inadequate abstracts, 65,262 articles remained in the final corpus. We utilized natural language processing to curate and generate the term list. We applied topic modeling and multiple correspondence analyses to reveal the major topics and the associations among topics, journal countries, and publication sources. Results: In our text-mining analyses of NIH's COVID-19 Portfolio, we discovered two sets of eleven major research topics by analyzing the abstracts and titles of the articles separately. The eleven major areas of COVID-19 research based on abstracts were: 1) Public Health, 2) Patient Care & Outcomes, 3) Epidemiologic Modeling, 4) Diagnosis and Complications, 5) Mechanism of Disease, 6) Health System Response, 7) Pandemic Control, 8) Protection/Prevention, 9) Mental/Behavioral Health, 10) Detection/Testing, and 11) Treatment Options. Further analyses revealed that five of the eleven abstract-based topics (2, 3, 4, 5, and 9) showed a significant correlation (ranging from moderate to weak) with title-based topics. Conclusion: By offering a more dynamic, scalable, and responsive categorization of the published literature, our study provides valuable insights to the stakeholders of COVID-19 research, particularly clinicians.
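
    The authors' pipeline is not reproduced here, but the topic-modeling step the abstract describes can be sketched with scikit-learn's LDA implementation, one common choice for this task. The documents below are placeholders, and the topic count is reduced from the eleven reported in the study.

    ```python
    # Sketch of abstract-level topic modeling with LDA (illustrative corpus).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    abstracts = [
        "public health response to the pandemic ...",        # placeholder documents
        "patient care outcomes in hospitalized adults ...",
        "epidemiologic modeling of transmission dynamics ...",
    ]

    # Bag-of-words document-term matrix with English stop words removed.
    vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=1)
    dtm = vectorizer.fit_transform(abstracts)

    # The study reports eleven major topics; on the full 65,262-article
    # corpus n_components would be 11. Three suffices for this toy corpus.
    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    lda.fit(dtm)

    # Print the top terms that characterize each inferred topic.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"Topic {k}: {', '.join(top)}")
    ```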

    A Novel, Species-Specific, Real-Time PCR Assay for the Detection of the Emerging Zoonotic Parasite Ancylostoma ceylanicum in Human Stool

    Historically, Ancylostoma ceylanicum has been viewed as an uncommon cause of human hookworm infection, with minimal public health importance. However, recent reports have indicated that this zoonotic hookworm causes a much greater incidence of infection within certain human populations than was previously believed. Current methods for the species-level detection of A. ceylanicum rely on techniques that involve conventional PCR accompanied by restriction enzyme digestions. These PCR-based assays are not only laborious but also lack sensitivity, as they target suboptimal regions of the DNA. As efforts aimed at the eradication of hookworm disease have grown substantially over the last decade, the need for sensitive and specific tools to monitor and evaluate programmatic successes has correspondingly escalated. Since a growing body of evidence suggests that patient responses to drug treatment can vary with the species of hookworm causing the infection, accurate species-level diagnostics are advantageous. Accordingly, the novel real-time PCR-based assay described here provides a sensitive, species-specific diagnostic tool that will facilitate the accurate mapping of disease endemicity and aid in the evaluation of programmatic deworming efforts.

    Understanding workflow in telehealth video visits: Observations from the IDEATel project

    Home telemedicine is an emerging healthcare paradigm that has the potential to transform the treatment of chronic illness. The purpose of this paper is to: (1) develop a theoretical and methodological framework for studying workflow in telemediated clinician–patient encounters, drawing on a distributed cognition approach, and (2) employ the framework in an in-depth analysis of workflow in the IDEATel project, a telemedicine program for older adults with diabetes. The methods employed in this research included (a) videotaped observations of 27 nurse–patient encounters and (b) semi-structured interviews with participants. The analyses were used to provide a descriptive account of video visits, to understand the mediating role of different technologies, and to characterize the ways in which artifacts and representations are used to understand the state of the patient. The study revealed barriers to productive use of telehealth technology as well as adaptations that circumvented such limitations. This research has design implications for: (a) improving the coordination of communication and (b) developing tools that better integrate and display information. Although home telemedicine programs will differ in important respects, there are invariant properties across such systems. Explicating these properties can serve as a needs-requirements analysis for developing more effective systems and implementation plans.

    A study of collaboration among medical informatics research laboratories

    The InterMed Collaboratory involves five medical institutions (Stanford University, Columbia University, Brigham and Women's Hospital, Massachusetts General Hospital, and McGill University) whose mandate has been to join in the development of shared infrastructural software, tools, and system components that will facilitate and support the development of diverse, institution-specific applications. Collaboration among geographically distributed organizations with different goals and cultures provides significant challenges. One experimental question, underlying all that InterMed has set out to achieve, is whether modern communication technologies can effectively bridge such cultural and geographical gaps, allowing the development of shared visions and cooperative activities so that the end results are greater than any one group could have accomplished on its own. In this paper we summarize the InterMed philosophy and mission, describe our progress over 3 years of collaborative activities, and present study results regarding the nature of the evolving collaborative processes, the perceptions of the participants regarding those processes, and the role that telephone conference calls have played in furthering project goals. Both informal introspection and more formal evaluative work, in which project participants became subjects of study by our evaluation experts from McGill, helped to shift our activities from relatively unfocused to more focused efforts while allowing us to understand the facilitating roles that communications technologies could play in our activities. Our experience and study results suggest that occasional face-to-face meetings are crucial precursors to the effective use of distance communications technologies; that conference calls play an important role in both task-related activities and executive (project management) activities, especially when clarifications are required; and that collaborative productivity is highly dependent upon the gradual development of a shared commitment to a well-defined task that leverages the varying expertise of both local and distant colleagues in the creation of tools of broad utility across the participating sites.

    A Visual Interactive Analytic Tool for Filtering and Summarizing Large Health Data Sets Coded with Hierarchical Terminologies (VIADS).

    BACKGROUND: Vast volumes of data, coded with hierarchical terminologies (e.g., International Classification of Diseases, Tenth Revision, Clinical Modification [ICD-10-CM], Medical Subject Headings [MeSH]), are generated routinely in electronic health record systems and medical literature databases. Although graphic representations can help to augment human understanding of such data sets, a graph with hundreds or thousands of nodes challenges human comprehension. To improve comprehension, new tools are needed to extract overviews of such data sets. We aim to develop a visual interactive analytic tool for filtering and summarizing large health data sets coded with hierarchical terminologies (VIADS) as an online, publicly accessible tool. The ultimate goals are to use VIADS to filter and summarize health data sets, extract insights, and compare and highlight the differences between data sets. The results generated by VIADS can be utilized as data-driven evidence to help clinicians, clinical researchers, and health care administrators make more informed clinical, research, and administrative decisions. METHODS: We utilized the following tools and development environments to develop VIADS: Django, Python, JavaScript, Vis.js, Graph.js, JQuery, Plotly, Chart.js, Unittest, R, and MySQL. RESULTS: VIADS was developed successfully, and the beta version is publicly accessible. In this paper, we introduce the architecture, development, and functionality of VIADS. VIADS includes six modules: user account management, data set validation, data analytics, data visualization, terminology, and dashboard. Currently, VIADS supports health data sets coded in ICD-9, ICD-10, and MeSH. We also present the visualization improvements provided by VIADS with regard to interactive features (e.g., zooming in and out, customization of graph layout, expanded node information, 3D plots) and efficient use of screen space. CONCLUSIONS: VIADS meets its design objectives and can be used to filter, summarize, compare, highlight, and visualize large health data sets coded with hierarchical terminologies such as ICD-9, ICD-10, and MeSH. Our further usability and utility studies will provide more detail about how end users use VIADS to facilitate their clinical, research, or health administrative decision making.
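
    As a way to picture the kind of summarization VIADS describes, the sketch below rolls leaf-level code counts up a toy ICD-10-CM-like hierarchy so that a large coded data set can be viewed at coarser levels. The hierarchy, counts, and roll_up helper are illustrative assumptions, not VIADS internals.

    ```python
    # Sketch: aggregate code counts up a hierarchical terminology.
    from collections import Counter

    # child code -> parent code (a tiny, illustrative ICD-10-CM-like fragment)
    parent = {
        "E11.9": "E11",     # Type 2 diabetes without complications
        "E11.2": "E11",     # Type 2 diabetes with kidney complications
        "E11":   "E08-E13", # Diabetes mellitus block
        "I10":   "I10-I16", # Hypertensive diseases block
    }

    # Leaf-level counts as they might come out of an EHR extract (toy values).
    leaf_counts = Counter({"E11.9": 120, "E11.2": 35, "I10": 210})

    def roll_up(counts, parent):
        """Propagate each code's count into all of its ancestors."""
        total = Counter(counts)
        for code, n in counts.items():
            node = code
            while node in parent:
                node = parent[node]
                total[node] += n
        return total

    summary = roll_up(leaf_counts, parent)
    for code, n in sorted(summary.items(), key=lambda kv: -kv[1]):
        print(f"{code:8s} {n}")
    ```

    Filtering then amounts to keeping only nodes above a count threshold or at a chosen depth, which is what makes a graph of thousands of nodes comprehensible at a glance.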

    A survey of practices for the use of electronic health records to support research recruitment

    Electronic health records (EHRs) hold great promise for identifying cohorts and enhancing research recruitment. Such approaches are sorely needed, but there are few descriptions in the literature of prevailing practices to guide their use. A multidisciplinary workgroup was formed to examine current practices in the use of EHRs in recruitment and to propose future directions. The group surveyed consortium members regarding current practices. Over 98% of Clinical and Translational Science Awards (CTSA) Consortium institutions responded to the survey. Brokered and self-service data warehouse access are in early or full operation at 94% and 92% of institutions, respectively; EHR alerts to providers and to research teams are at 45% and 48%, respectively; and use of patient portals for research is at 20%. However, these percentages increase significantly, to 88% and above, when planning and exploratory work are considered cumulatively. For most approaches, implementation reflected perceived demand. Regulatory and workflow processes were similarly varied, and many respondents described substantive restrictions arising from logistical constraints and limitations on collaboration and data sharing. The survey results reflect wide variation in implementation and approach, and point to a strong need for comparative research and for the development of best practices that protect patients and facilitate interinstitutional collaboration and multisite research.

    Sustainability considerations for clinical and translational research informatics infrastructure

    A robust biomedical informatics infrastructure is essential for academic health centers engaged in translational research. There are no templates for what such an infrastructure encompasses or how it is funded. An informatics workgroup within the Clinical and Translational Science Awards network conducted an analysis to identify the scope, governance, and funding of this infrastructure. After we identified the essential components of an informatics infrastructure, we surveyed informatics leaders at network institutions about the governance and sustainability of the different components. Results from 42 survey respondents showed significant variation in governance and sustainability; however, some trends emerged. Core informatics components such as electronic data capture systems, electronic health record data repositories, and related tools had mixed funding models, including fee-for-service, extramural grants, and institutional support. Several key components, such as regulatory systems (e.g., electronic Institutional Review Board [IRB] systems, grants, and contracts), security systems, data warehouses, and clinical trials management systems, were overwhelmingly supported as institutional infrastructure. The findings highlighted in this report are worth noting for academic health centers and funding agencies involved in planning current and future informatics infrastructure, which provides the foundation for a robust, data-driven clinical and translational research program.

    A Systematic Approach to Configuring MetaMap for Optimal Performance

    Background: MetaMap is a valuable tool for processing biomedical texts to identify concepts. Although MetaMap is highly configurable, configuration decisions are not straightforward. Objective: To develop a systematic, data-driven methodology for configuring MetaMap for optimal performance. Methods: MetaMap, the word2vec model, and the phrase model were used to build a pipeline. For unsupervised training, the phrase and word2vec models used abstracts related to clinical decision support as input. During testing, MetaMap was configured with the default option, one behavior option, and two behavior options. For each configuration, cosine and soft cosine similarity scores between identified entities and gold-standard terms were computed for 40 annotated abstracts (422 sentences). The similarity scores were used to calculate and compare the overall percentages of exact matches, similar matches, and missing gold-standard terms among the abstracts for each configuration. The results were manually spot-checked. Precision, recall, and F-measure (β = 1) were calculated. Results: The percentages of exact matches and missing gold-standard terms were 0.6–0.79 and 0.09–0.3 for one behavior option, and 0.56–0.8 and 0.09–0.3 for two behavior options, respectively. The percentages of exact matches and missing terms based on soft cosine similarity scores exceeded those based on cosine similarity scores. The average precision, recall, and F-measure were 0.59, 0.82, and 0.68 for exact matches, and 1.00, 0.53, and 0.69 for missing terms, respectively. Conclusion: We demonstrated a systematic approach that provides objective and accurate evidence to guide MetaMap configuration for optimal performance. Combining this objective evidence with the current practice of relying on principles, experience, and intuition outperforms either strategy alone in configuring MetaMap. Our methodology, reference code, measurements, results, and workflow are valuable references for optimizing and configuring MetaMap.
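
    The similarity scoring described in the Methods can be sketched with gensim's word2vec, as below; the training sentences and terms are placeholders, and only plain cosine similarity over averaged word vectors is shown (the study additionally uses soft cosine similarity).

    ```python
    # Sketch: score a MetaMap-extracted entity against a gold-standard term
    # by cosine similarity of averaged word2vec vectors (illustrative data).
    import numpy as np
    from gensim.models import Word2Vec

    # Tiny stand-in for the clinical-decision-support abstracts used for
    # unsupervised training in the study.
    sentences = [
        ["clinical", "decision", "support", "alert", "fatigue"],
        ["drug", "interaction", "alert", "clinical", "decision", "support"],
        ["decision", "support", "system", "for", "drug", "dosing"],
    ]
    model = Word2Vec(sentences, vector_size=50, min_count=1, seed=0, epochs=50)

    def term_vector(term, model):
        """Average the vectors of a term's in-vocabulary tokens."""
        vecs = [model.wv[t] for t in term.lower().split() if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else None

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    extracted, gold = "clinical decision support", "decision support system"
    va, vb = term_vector(extracted, model), term_vector(gold, model)
    if va is not None and vb is not None:
        print(f"cosine({extracted!r}, {gold!r}) = {cosine(va, vb):.3f}")
    ```

    Running this per MetaMap configuration and thresholding the scores yields the exact-match, similar-match, and missing-term percentages the study compares.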